1.
PLoS One ; 19(5): e0299255, 2024.
Article En | MEDLINE | ID: mdl-38722923

Despite the great importance of centrality metrics in understanding the topology of a network, little is known about the effect that small alterations in the topology of the input graph induce in the norm of the vector that stores the node centralities. If that effect were small, it would be possible to avoid re-calculating the vector of centrality metrics whenever minimal changes occur in the network topology, which would allow for significant computational savings. Hence, after formalising the notion of centrality, three of the most basic metrics were considered (i.e., Degree, Eigenvector, and Katz centrality). To perform the simulations, two probabilistic failure models were used to describe alterations in network topology: Uniform (i.e., all nodes can be independently deleted from the network with a fixed probability) and Best Connected (i.e., the probability that a node is removed depends on its degree). Our analysis suggests that small variations in the topology of the input graph induce small variations in Degree centrality, independently of the topological features of the input graph; conversely, both Eigenvector and Katz centralities can be extremely sensitive to changes in the topology of the input graph. In other words, if the input graph has some specific features, even small changes in its topology can have catastrophic effects on the Eigenvector or Katz centrality.
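
As a toy illustration of the sensitivity question studied here (the graph, the damping factor alpha, and the deleted node are all invented for the example), the sketch below computes Katz centrality via the fixed-point iteration x ← αAx + 1, which converges when α < 1/λmax(A), then deletes one node (a "failure") and compares the norm of the centrality vector before and after:

```python
import math

def katz_centrality(adj, alpha=0.1, iters=200):
    # Fixed-point iteration for x = alpha * A x + 1 on an adjacency dict.
    nodes = sorted(adj)
    x = {v: 1.0 for v in nodes}
    for _ in range(iters):
        x = {v: 1.0 + alpha * sum(x[u] for u in adj[v]) for v in nodes}
    return x

def remove_node(adj, node):
    # Simulate a node failure: drop the node and all edges touching it.
    return {v: [u for u in nbrs if u != node]
            for v, nbrs in adj.items() if v != node}

def norm(x):
    return math.sqrt(sum(v * v for v in x.values()))

# Hypothetical path graph 1-2-3
adj = {1: [2], 2: [1, 3], 3: [2]}
before = katz_centrality(adj)
after = katz_centrality(remove_node(adj, 3))
print(norm(before), norm(after))
```

The same comparison could be repeated over many random deletions to mimic the Uniform failure model.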


Algorithms , Computer Simulation , Models, Theoretical , Models, Statistical , Probability
2.
BMC Med Res Methodol ; 24(1): 107, 2024 May 09.
Article En | MEDLINE | ID: mdl-38724889

BACKGROUND: Semiparametric survival analysis, such as the Cox proportional hazards (CPH) regression model, is commonly employed in endometrial cancer (EC) studies. Although this method does not require knowledge of the baseline hazard function, it cannot estimate the event time ratio (ETR), which measures the relative increase or decrease in survival time. To estimate the ETR, a Weibull parametric model must be applied. The objective of this study is to develop and evaluate the Weibull parametric model for survival analysis of EC patients. METHODS: Training (n = 411) and testing (n = 80) datasets from EC patients were retrospectively collected to investigate this problem. To determine the optimal CPH model from the training dataset, bi-level model selection with a minimax concave penalty was applied to select clinical and radiomic features obtained from T2-weighted MRI images. After the CPH model was built, model diagnostics were carried out to evaluate the proportional hazards assumption with the Schoenfeld test. Survival data were fitted to a Weibull model, and the hazard ratio (HR) and ETR were calculated from the model. The Brier score and the time-dependent area under the receiver operating characteristic curve (AUC) were compared between the CPH and Weibull models. Goodness of fit was measured with the Kolmogorov-Smirnov (KS) statistic. RESULTS: Although the proportional hazards assumption holds for the EC survival data, the linearity assumption is questionable, as there are trends in the age and cancer grade predictors. The results also showed a significant relation between the EC survival data and the Weibull distribution. Finally, the Weibull model generally had a larger AUC than the CPH model, as well as a smaller Brier score for EC survival prediction on both the training and testing datasets, suggesting that the Weibull model is more accurate for EC survival analysis.
CONCLUSIONS: The Weibull parametric model for EC survival analysis allows simultaneous characterization of the treatment effect in terms of the hazard ratio and the event time ratio (ETR), which is likely to be more readily understood. This method can be extended to study progression-free survival and disease-specific survival. TRIAL REGISTRATION: ClinicalTrials.gov NCT03543215, https://clinicaltrials.gov/ , date of registration: 30th June 2017.
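
The HR/ETR link exploited here follows from standard Weibull algebra, not from the paper's fitted model: if treatment stretches event times by a factor ETR under a Weibull model with shape k, the implied hazard ratio is HR = ETR^(-k). A minimal sketch with made-up parameter values:

```python
import math

def weibull_surv(t, scale, shape):
    # Weibull survival function S(t) = exp(-(t / scale)**shape)
    return math.exp(-((t / scale) ** shape))

shape, scale0, etr = 1.5, 10.0, 2.0   # hypothetical shape, control scale, ETR
scale1 = etr * scale0                  # treated arm: event times stretched by ETR
hr = etr ** (-shape)                   # implied hazard ratio

t = 7.0
s0, s1 = weibull_surv(t, scale0, shape), weibull_surv(t, scale1, shape)
print(hr, s0, s1)
```

Because the Weibull family satisfies both the AFT and PH assumptions, the same pair of curves is consistent with both interpretations: S1(t) = S0(t/ETR) and S1(t) = S0(t)^HR.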


Endometrial Neoplasms , Magnetic Resonance Imaging , Proportional Hazards Models , Humans , Female , Endometrial Neoplasms/mortality , Endometrial Neoplasms/diagnostic imaging , Middle Aged , Magnetic Resonance Imaging/methods , Retrospective Studies , Survival Analysis , Aged , ROC Curve , Adult , Models, Statistical , Radiomics
3.
Trials ; 25(1): 312, 2024 May 09.
Article En | MEDLINE | ID: mdl-38725072

BACKGROUND: Clinical trials often involve some form of interim monitoring to determine futility before planned trial completion. While many options for interim monitoring exist (e.g., alpha-spending, conditional power), nonparametric interim monitoring methods are also needed to accommodate more complex trial designs and analyses. The upstrap is one recently proposed nonparametric method that may be applied for interim monitoring. METHODS: Upstrapping is motivated by the case-resampling bootstrap and involves repeatedly sampling with replacement from the interim data to simulate thousands of fully enrolled trials. The p-value is calculated for each upstrapped trial, and the proportion of upstrapped trials for which the p-value criterion is met is compared with a pre-specified decision threshold. To evaluate the potential utility of upstrapping as a form of interim futility monitoring, we conducted a simulation study considering different sample sizes and several proposed calibration strategies for the upstrap. We first compared trial rejection rates across a selection of threshold combinations to validate the upstrapping method. Then, we applied upstrapping methods to simulated clinical trial data, directly comparing their performance with more traditional alpha-spending and conditional power interim monitoring methods for futility. RESULTS: The method validation demonstrated that upstrapping is much more likely to find evidence of futility in the null scenario than in the alternative across a variety of simulation settings. Our three proposed approaches for calibration of the upstrap had different strengths depending on the stopping rules used.
Compared to O'Brien-Fleming group sequential methods, upstrapped approaches had type I error rates that differed by at most 1.7%, and expected sample size was 2-22% lower in the null scenario; in the alternative scenario, power ranged from 15.7% lower to 0.2% higher, and expected sample size was 0-15% lower. CONCLUSIONS: In this proof-of-concept simulation study, we evaluated the potential of upstrapping as a resampling-based method for futility monitoring in clinical trials. The trade-offs in expected sample size, power, and type I error rate control indicate that the upstrap can be calibrated to implement futility monitoring with varying degrees of aggressiveness, and that its performance can be made comparable to the considered alpha-spending and conditional power futility monitoring methods.
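
A minimal sketch of the upstrap loop described above, with invented interim data, endpoint, and thresholds (the study itself considers several calibration strategies not reproduced here): resample the interim data with replacement up to the planned full sample size, compute a p-value per upstrapped "trial", and flag futility when too few upstrapped trials reach significance.

```python
import math, random

def two_prop_pvalue(x1, n1, x2, n2):
    # Two-sided two-proportion z-test p-value via the normal approximation.
    p1, p2, pp = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
    se = math.sqrt(pp * (1 - pp) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
interim_ctrl = [1] * 12 + [0] * 38   # 12/50 events at interim, hypothetical
interim_trt  = [1] * 10 + [0] * 40   # 10/50 events at interim, hypothetical
n_full, reps, alpha, threshold = 100, 2000, 0.05, 0.20

hits = 0
for _ in range(reps):
    c = [random.choice(interim_ctrl) for _ in range(n_full)]
    t = [random.choice(interim_trt) for _ in range(n_full)]
    if two_prop_pvalue(sum(c), n_full, sum(t), n_full) < alpha:
        hits += 1
prop = hits / reps
print(prop, "stop for futility" if prop < threshold else "continue")
```

The decision threshold (here 0.20) is exactly the quantity the paper's calibration strategies are designed to tune.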


Clinical Trials as Topic , Computer Simulation , Medical Futility , Research Design , Humans , Clinical Trials as Topic/methods , Sample Size , Data Interpretation, Statistical , Models, Statistical , Treatment Outcome
4.
Syst Rev ; 13(1): 128, 2024 May 09.
Article En | MEDLINE | ID: mdl-38725074

BACKGROUND: Binary outcomes are likely the most common in randomized controlled trials, but ordinal outcomes can also be of interest. For example, rather than simply collecting data on diseased versus healthy study subjects, investigators may collect information on the severity of disease, with no disease, mild, moderate, and severe disease as possible levels of the outcome. While some investigators may be interested in all levels of the ordinal variable, others may combine levels that are not of particular interest. Therefore, when research synthesizers subsequently conduct a network meta-analysis on a network of trials for which an ordinal outcome was measured, they may encounter a network in which outcome categorization varies across trials. METHODS: The standard method for network meta-analysis for an ordinal outcome based on a multinomial generalized linear model is not designed to accommodate the multiple outcome categorizations that might occur across trials. In this paper, we propose a network meta-analysis model for an ordinal outcome that allows for multiple categorizations. The proposed model incorporates the partial information provided by trials that combine levels through modification of the multinomial likelihoods of the affected arms, allowing for all available data to be considered in estimation of the comparative effect parameters. A Bayesian fixed effect model is used throughout, where the ordinality of the outcome is accounted for through the use of the adjacent-categories logit link. RESULTS: We illustrate the method by analyzing a real network of trials on the use of antibiotics aimed at preventing liver abscesses in beef cattle and explore properties of the estimates of the comparative effect parameters through simulation. We find that even with the categorization of the levels varying across trials, the magnitudes of the biases are relatively small and that under a large sample size, the root mean square errors become small as well. 
CONCLUSIONS: Our proposed method to conduct a network meta-analysis for an ordinal outcome when the categorization of the outcome varies across trials, which utilizes the adjacent-categories logit link, performs well in estimation. Because the method considers all available data in a single estimation, it will be particularly useful to research synthesizers when the network of interest has only a limited number of trials for each categorization of the outcome.
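
The adjacent-categories logit link used above can be sketched in its generic form (the paper's model adds trial- and arm-specific structure omitted here): with η_j = log(p_{j+1}/p_j), category probabilities are proportional to the exponentials of cumulative sums of the η's.

```python
import math

def acat_probs(etas):
    # etas holds the K-1 adjacent-category logits for K ordered categories;
    # p_j is proportional to exp(eta_1 + ... + eta_{j-1}).
    logw = [0.0]
    for e in etas:
        logw.append(logw[-1] + e)
    w = [math.exp(v) for v in logw]
    total = sum(w)
    return [v / total for v in w]

etas = [0.5, -0.3, 0.1]   # hypothetical logits for a 4-level ordinal outcome
p = acat_probs(etas)
print(p)
```

Combining outcome levels in a trial corresponds to summing the affected probabilities in that trial's multinomial likelihood, which is how the proposed model uses the partial information.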


Network Meta-Analysis , Humans , Randomized Controlled Trials as Topic , Outcome Assessment, Health Care , Models, Statistical
5.
Sci Rep ; 14(1): 10775, 2024 05 11.
Article En | MEDLINE | ID: mdl-38730261

Accurate short-term predictions of COVID-19 cases with empirical models allow health officials to prepare for hospital contingencies in a two-to-three-week window, given the delay between case reporting and the admission of patients to a hospital. We investigate the ability of Gompertz-type empirical models to provide accurate predictions up to two and three weeks ahead, giving a large window of preparation in case of a surge in virus transmission. We investigate the stability and accuracy of the predictions using bi-weekly predictions during the last trimester of 2020 and 2021. Using data from 2020, we show that understanding and correcting for the daily reporting structure of cases in the different countries is key to accomplishing accurate predictions. Furthermore, we found that filtering out predictions that are highly unstable to changes in the parameters of the model, roughly 20% of them, sharply reduces the number of predictions that are far off. The method was then tested for robustness with data from 2021. We found that, for these data, only 1-2% of the one-week predictions were off by more than 50%. This increased to 3% for two-week predictions, and only for three-week predictions did it reach 10%.
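
A Gompertz-type cumulative-case curve and the two- and three-week-ahead predictions it yields can be sketched as follows (K, a, b, and the current day are made-up values, not fitted to any country's data):

```python
import math

def gompertz(t, K, a, b):
    # Gompertz cumulative-case curve C(t) = K * exp(-a * exp(-b * t)).
    return K * math.exp(-a * math.exp(-b * t))

K, a, b = 100_000.0, 5.0, 0.05   # assumed asymptote and shape parameters
today = 60                        # days since outbreak start, hypothetical
for horizon in (7, 14, 21):       # one-, two-, three-week-ahead predictions
    print(horizon, round(gompertz(today + horizon, K, a, b)))
```

In practice the parameters would be re-fitted at each prediction date, and predictions highly unstable to parameter perturbations would be filtered out, as the abstract describes.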


COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , COVID-19/virology , Humans , SARS-CoV-2/isolation & purification , Time Factors , Models, Statistical
6.
Accid Anal Prev ; 202: 107612, 2024 Jul.
Article En | MEDLINE | ID: mdl-38703590

The paper presents an exploratory study of a road safety policy index developed for Norway. The index consists of ten road safety measures for which data on their use from 1980 to 2021 are available. The ten measures were combined into an index with an initial value of 50 in 1980, increasing to 185 in 2021. To assess the application of the index in evaluating the effects of road safety policy, negative binomial regression models and multivariate time series models were developed for traffic fatalities, fatalities and serious injuries, and all injuries. The coefficient for the policy index was negative, indicating that road safety policy has contributed to reducing the number of fatalities and injuries. The size of this contribution can be estimated by means of at least three estimators, which do not always produce identical values. There is little doubt about the sign of the relationship: a stronger road safety policy (as indicated by index values) is associated with a larger decline in fatalities and injuries. A precise quantification is, however, not possible. Different estimators of effect, all of which can be regarded as plausible, yield different results.


Accidents, Traffic , Safety , Accidents, Traffic/mortality , Accidents, Traffic/prevention & control , Accidents, Traffic/statistics & numerical data , Humans , Norway , Wounds and Injuries/prevention & control , Wounds and Injuries/mortality , Wounds and Injuries/epidemiology , Public Policy , Models, Statistical , Regression Analysis , Automobile Driving/legislation & jurisprudence , Automobile Driving/statistics & numerical data
7.
BMC Public Health ; 24(1): 1307, 2024 May 14.
Article En | MEDLINE | ID: mdl-38745217

BACKGROUND: In Guangdong Province, China, there is a lack of information on the HIV epidemic among high-risk groups and the general population, particularly in relation to sexual transmission, which is a predominant route. The number of new HIV infections each year is also uncertain, owing to HIV transmission from men who have sex with men (MSM) to women, as a substantial proportion of MSM also have female sexual partnerships to comply with social demands in China. METHODS: A deterministic compartmental model was developed to predict new HIV infections in four risk groups, including heterosexual men, heterosexual women, and low- and high-risk MSM, in Guangdong Province from 2016 to 2050, accounting for HIV transmission from MSM to women. New HIV infections and their 95% credible intervals (CrI) were predicted. An adaptive sequential Monte Carlo method for approximate Bayesian computation (ABC-SMC) was used to estimate the unknown parameter, a mixing index. We calibrated our results based on new HIV diagnoses and proportions of late diagnoses. The Morris and Sobol methods were applied in the sensitivity analysis. RESULTS: New HIV infections increased during and for 2 years after the COVID-19 pandemic, then declined until 2050. New infections rose from 8,828 [95% credible interval (CrI): 6,435-10,451] in 2016 to 9,652 (95% CrI: 7,027-11,434) in 2019, peaking at 11,152 (95% CrI: 8,337-13,062) in 2024 before declining to 7,084 (95% CrI: 5,165-8,385) in 2035 and 4,849 (95% CrI: 3,524-5,747) in 2050. Women accounted for approximately 25.0% of new HIV infections, MSM accounted for 40.0% (approximately 55.0% of men), and high-risk MSM accounted for approximately 25.0% of the total. The ABC-SMC mixing index was 0.504 (95% CrI: 0.239-0.894).
CONCLUSIONS: Given that new HIV infections, and the proportion among women, remained relatively high in our calibrated model, the HIV epidemic in Guangdong Province remains serious. Services for HIV prevention and control urgently need to return to pre-COVID-19 levels, especially in promoting condom-based safe sex and increasing awareness of HIV prevention in the general population.
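
A drastically simplified sketch of the MSM-to-women transmission route that the compartmental model captures (two groups instead of four, SI dynamics only, and every rate and population size invented): a mixing index omega routes a fraction of MSM contacts to female partners.

```python
def simulate(years=10.0, dt=0.01, omega=0.5):
    # Toy SI compartments; all values are hypothetical, not calibrated.
    S_m, I_m = 95_000.0, 5_000.0      # MSM susceptible / infected
    S_w, I_w = 995_000.0, 5_000.0     # women susceptible / infected
    beta_mm, beta_mw = 0.30, 0.10     # assumed per-year transmission rates
    for _ in range(round(years / dt)):
        N_m = S_m + I_m
        new_m = beta_mm * (1 - omega) * S_m * I_m / N_m * dt  # within MSM
        new_w = beta_mw * omega * S_w * I_m / N_m * dt        # MSM -> women
        S_m, I_m = S_m - new_m, I_m + new_m
        S_w, I_w = S_w - new_w, I_w + new_w
    return S_m, I_m, S_w, I_w

S_m, I_m, S_w, I_w = simulate()
print(round(I_m), round(I_w))
```

In the actual study, omega is the unknown mixing index estimated by ABC-SMC from diagnosis data.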


COVID-19 , HIV Infections , Humans , China/epidemiology , HIV Infections/epidemiology , HIV Infections/transmission , HIV Infections/prevention & control , Male , Female , COVID-19/epidemiology , COVID-19/prevention & control , Bayes Theorem , Homosexuality, Male/statistics & numerical data , Adult , Models, Statistical
8.
BMC Med Res Methodol ; 24(1): 110, 2024 May 07.
Article En | MEDLINE | ID: mdl-38714936

Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when incorporating prior information into a new trial using quality external data, such as historical data or another source of co-data. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials where frequentist approaches are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of a Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and power for all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of the type I error rate, multiplicity adjustments, external data borrowing, and related topics in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided to serve as a valuable resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.
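
One core idea of the review, checking the frequentist type I error rate of a Bayesian decision rule by simulation, can be sketched with a toy single-arm design (Beta(1,1) prior and invented null response rate and cutoff; for integer Beta parameters the posterior tail probability has a closed binomial form, P(p > q) = P(Binomial(a+b-1, q) <= a-1)):

```python
import math, random

def post_prob_greater(a, b, q):
    # P(p > q) for p ~ Beta(a, b) with integer a, b, via the binomial identity.
    n = a + b - 1
    return sum(math.comb(n, k) * q**k * (1 - q)**(n - k) for k in range(a))

def type1_error(n=50, p0=0.3, cut=0.975, sims=4000, seed=7):
    # Simulate trials under the null p = p0 and count how often the rule
    # "declare success if P(p > p0 | data) > cut" fires.
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        x = sum(rng.random() < p0 for _ in range(n))
        if post_prob_greater(1 + x, 1 + n - x, p0) > cut:
            rejections += 1
    return rejections / sims

t1 = type1_error()
print(t1)
```

The same loop, run under alternatives rather than the null, gives the frequentist power of the Bayesian rule.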


Bayes Theorem , Clinical Trials as Topic , Humans , Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Research Design/standards , Sample Size , Data Interpretation, Statistical , Models, Statistical
9.
Biometrics ; 80(2)2024 Mar 27.
Article En | MEDLINE | ID: mdl-38742906

Semicompeting risks refer to the phenomenon that the terminal event (such as death) can censor the nonterminal event (such as disease progression) but not vice versa. The treatment effect on the terminal event can be delivered either directly following the treatment or indirectly through the nonterminal event. We consider 2 strategies to decompose the total effect into a direct effect and an indirect effect under the framework of mediation analysis in completely randomized experiments by adjusting the prevalence and hazard of nonterminal events, respectively. They require slightly different assumptions on cross-world quantities to achieve identifiability. We establish asymptotic properties for the estimated counterfactual cumulative incidences and decomposed treatment effects. We illustrate the subtle difference between these 2 decompositions through simulation studies and two real-data applications in the Supplementary Materials.


Computer Simulation , Humans , Models, Statistical , Risk , Randomized Controlled Trials as Topic/statistics & numerical data , Mediation Analysis , Treatment Outcome , Biometry/methods
10.
Biometrics ; 80(2)2024 Mar 27.
Article En | MEDLINE | ID: mdl-38742907

We propose a new non-parametric conditional independence test for a scalar response and a functional covariate over a continuum of quantile levels. We build a Cramer-von Mises type test statistic based on an empirical process indexed by random projections of the functional covariate, effectively avoiding the "curse of dimensionality" under the projected hypothesis, which is almost surely equivalent to the null hypothesis. The asymptotic null distribution of the proposed test statistic is obtained under some mild assumptions. The asymptotic global and local power properties of our test statistic are then investigated. We specifically demonstrate that the statistic is able to detect a broad class of local alternatives converging to the null at the parametric rate. Additionally, we recommend a simple multiplier bootstrap approach for estimating the critical values. The finite-sample performance of our statistic is examined through several Monte Carlo simulation experiments. Finally, an analysis of an EEG data set is used to show the utility and versatility of our proposed test statistic.


Computer Simulation , Models, Statistical , Monte Carlo Method , Humans , Electroencephalography/statistics & numerical data , Data Interpretation, Statistical , Biometry/methods , Statistics, Nonparametric
11.
AAPS J ; 26(3): 53, 2024 Apr 23.
Article En | MEDLINE | ID: mdl-38722435

The standard errors (SE) of the maximum likelihood estimates (MLE) of the population parameter vector in nonlinear mixed effects models (NLMEM) are usually estimated using the inverse of the Fisher information matrix (FIM). However, at a finite distance, i.e., far from the asymptotic regime, the FIM can underestimate the SE of NLMEM parameters. Alternatively, the standard deviation of the posterior distribution, obtained in Stan via the Hamiltonian Monte Carlo algorithm, has been shown to be a proxy for the SE, since, under some regularity conditions on the prior, the limiting distributions of the MLE and of the maximum a posteriori estimator in a Bayesian framework are equivalent. In this work, we develop a similar method using the Metropolis-Hastings (MH) algorithm in parallel to the stochastic approximation expectation maximisation (SAEM) algorithm, implemented in the saemix R package. We assess this method on different simulation scenarios and on data from a real case study, comparing it to other SE computation methods. The simulation study shows that our method improves on the results obtained with frequentist methods at finite distance. However, it performed poorly in a scenario with the high variability and correlations observed in the real case study, stressing the need for calibration.
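
The posterior-sd-as-SE idea can be sketched on a deliberately simple model (a normal mean with known sigma and a flat prior, nothing like an NLMEM): the standard deviation of a Metropolis-Hastings chain should approximate the frequentist SE, here sigma/sqrt(n).

```python
import math, random

rng = random.Random(42)
sigma, n = 2.0, 100
data = [rng.gauss(1.0, sigma) for _ in range(n)]
xbar = sum(data) / n

def log_post(mu):
    # Flat prior, so the log-posterior is just the log-likelihood (up to const).
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

mu, lp = xbar, log_post(xbar)
draws = []
for _ in range(20_000):
    prop = mu + rng.gauss(0.0, 0.3)        # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(rng.random()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    draws.append(mu)
draws = draws[5_000:]                       # discard burn-in
m = sum(draws) / len(draws)
sd = math.sqrt(sum((d - m) ** 2 for d in draws) / len(draws))
print(sd, sigma / math.sqrt(n))
```

Here the posterior sd and the analytic SE agree by construction; the paper's contribution is to make this proxy work at finite distance in NLMEM, where the FIM-based SE can fail.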


Algorithms , Computer Simulation , Monte Carlo Method , Nonlinear Dynamics , Uncertainty , Likelihood Functions , Bayes Theorem , Humans , Models, Statistical
12.
Sci Rep ; 14(1): 9962, 2024 04 30.
Article En | MEDLINE | ID: mdl-38693172

The COVID-19 pandemic caused by the novel SARS-CoV-2 virus poses a great risk to the world. During the pandemic, observing and forecasting several important indicators of the epidemic (such as new confirmed cases, new cases in intensive care units, and new deaths each day) helped prepare the appropriate response (e.g., creating additional intensive care unit beds and implementing strict interventions). Various predictive models and predictor variables have been used to forecast these indicators. However, the impact of prediction models and predictor variables on forecasting performance has not been systematically analyzed. Here, we compared forecasting performance using a linear mixed model, in terms of prediction models (mathematical, statistical, and AI/machine learning models) and predictor variables (vaccination rate, stringency index, and Omicron variant rate), for seven selected countries with the highest vaccination rates. We chose our best models based on the Bayesian Information Criterion (BIC) and analyzed the significance of each predictor. Simple models were preferred. The selection of the best prediction models and the use of the Omicron variant rate were essential in improving prediction accuracy. For the test period before the emergence of the Omicron variant, the selection of the best models was the most significant factor in improving prediction accuracy. For the test period after Omicron emerged, use of the Omicron variant rate was essential for forecasting accuracy. Among prediction models, ARIMA, lightGBM, and TSGLM generally performed well in both test periods. Linear mixed models with country as a random effect showed that the choice of prediction model and the use of Omicron data were significant in determining forecasting accuracy for the highly vaccinated countries.
Relatively simple models, fitted with either the best prediction model or the Omicron data, produced the best results in enhancing forecasting accuracy on the test data.
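
The BIC-based selection step can be sketched as follows (toy data and Gaussian-error models, not the paper's candidate forecasters); for Gaussian errors, BIC = k·ln(n) − 2·ln(L̂) reduces to n·ln(RSS/n) + k·ln(n) up to a constant.

```python
import math

def bic_gaussian(rss, n, k):
    # BIC for a Gaussian-error model, up to an additive constant.
    return n * math.log(rss / n) + k * math.log(n)

# toy series with a clear upward trend (invented numbers)
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]
n = len(y)
t = list(range(n))

# model 1: constant mean (k = 2: mean + noise variance)
mean = sum(y) / n
rss1 = sum((v - mean) ** 2 for v in y)

# model 2: linear trend (k = 3: intercept, slope, noise variance)
tbar = sum(t) / n
slope = (sum((ti - tbar) * (v - mean) for ti, v in zip(t, y))
         / sum((ti - tbar) ** 2 for ti in t))
icept = mean - slope * tbar
rss2 = sum((v - (icept + slope * ti)) ** 2 for ti, v in zip(t, y))

b1, b2 = bic_gaussian(rss1, n, 2), bic_gaussian(rss2, n, 3)
print(b1, b2, "trend model wins" if b2 < b1 else "constant model wins")
```

The ln(n) penalty is what makes BIC favour the simpler model unless the extra parameter buys a substantial fit improvement, matching the abstract's "simple models were preferred".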


COVID-19 Vaccines , COVID-19 , Forecasting , SARS-CoV-2 , Humans , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19/virology , Forecasting/methods , SARS-CoV-2/immunology , Vaccination , Machine Learning , Pandemics/prevention & control , Health Policy , Bayes Theorem , Models, Statistical
13.
Biometrics ; 80(2)2024 Mar 27.
Article En | MEDLINE | ID: mdl-38708764

When studying treatment effects on time-to-event outcomes, it is common that some individuals never experience failure events, which suggests that they have been cured. However, the cure status may not be observed due to censoring, which makes it challenging to define treatment effects. Current methods mainly focus on estimating model parameters in various cure models, ultimately leading to a lack of causal interpretation. To address this issue, we propose 2 causal estimands, the timewise risk difference and the mean survival time difference in the always-uncured, based on principal stratification, as a complement to the treatment effect on cure rates. These estimands allow us to study the treatment effects on failure times in the always-uncured subpopulation. We show that, using a substitutional variable for the potential cure status under an ignorable treatment assignment mechanism, these 2 estimands are identifiable. We also provide estimation methods using mixture cure models. We applied our approach to an observational study that compared the leukemia-free survival rates of different transplantation types to cure acute lymphoblastic leukemia. Our proposed approach yielded insightful results that can be used to inform future treatment decisions.


Models, Statistical , Precursor Cell Lymphoblastic Leukemia-Lymphoma , Humans , Precursor Cell Lymphoblastic Leukemia-Lymphoma/mortality , Precursor Cell Lymphoblastic Leukemia-Lymphoma/therapy , Precursor Cell Lymphoblastic Leukemia-Lymphoma/drug therapy , Causality , Biometry/methods , Treatment Outcome , Computer Simulation , Disease-Free Survival , Survival Analysis
14.
Biometrics ; 80(2)2024 Mar 27.
Article En | MEDLINE | ID: mdl-38708763

Time-series data collected from a network of random variables are useful for identifying temporal pathways among the network nodes. Observed measurements may contain multiple sources of signals and noises, including Gaussian signals of interest and non-Gaussian noises such as artifacts, structured noise, and other unobserved factors (eg, genetic risk factors, disease susceptibility). Existing methods, including vector autoregression (VAR) and dynamic causal modeling, do not account for unobserved non-Gaussian components. Furthermore, existing methods cannot effectively distinguish contemporaneous relationships from temporal relations. In this work, we propose a novel method to identify latent temporal pathways using time-series biomarker data collected from multiple subjects. The model adjusts for the non-Gaussian components and separates the temporal network from the contemporaneous network. Specifically, an independent component analysis (ICA) is used to extract the unobserved non-Gaussian components, and the residuals are used to estimate the contemporaneous and temporal networks among the node variables based on the method of moments. The algorithm is fast and can easily scale up. We derive the identifiability and the asymptotic properties of the temporal and contemporaneous networks. We demonstrate the superior performance of our method by extensive simulations and an application to a study of attention-deficit/hyperactivity disorder (ADHD), where we analyze the temporal relationships between brain regional biomarkers. We find that temporal network edges connected different brain regions, while most contemporaneous network edges were bilateral between the same regions and belonged to a subset of the functional connectivity network.


Algorithms , Biomarkers , Computer Simulation , Models, Statistical , Humans , Biomarkers/analysis , Normal Distribution , Attention Deficit Disorder with Hyperactivity , Time Factors , Biometry/methods
15.
PLoS One ; 19(5): e0301259, 2024.
Article En | MEDLINE | ID: mdl-38709733

Bayesian control charts are emerging as efficient statistical tools for monitoring manufacturing processes and providing effective control over process variability. The Bayesian approach is particularly suitable for addressing parametric uncertainty in the manufacturing industry. In this study, we determine the monitoring threshold for the shape parameter of the inverse Gaussian distribution (IGD) and design different exponentially weighted moving average (EWMA) control charts based on different loss functions (LFs). The impact of hyperparameters on the Bayes estimates (BEs) and posterior risks (PRs) is investigated. Performance measures such as the average run length (ARL), standard deviation of run length (SDRL), and median run length (MRL) are employed to evaluate the suggested approach. The designed Bayesian charts are evaluated for different settings of the smoothing constant of the EWMA chart, different sample sizes, and pre-specified false alarm rates. The simulation study demonstrates the effectiveness of the suggested Bayesian EWMA charts compared with conventional classical EWMA charts. The proposed EWMA chart techniques are highly efficient in detecting shifts in the shape parameter and outperform their classical counterparts in detecting faults quickly. The proposed technique is also applied to a real-data case study from the aerospace manufacturing industry, where the quality characteristic of interest was the monthly industrial production index of aircraft from January 1980 to December 2022. The real-data findings also validate the conclusions based on the simulation results.
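
For orientation, the classical EWMA recursion z_t = λx_t + (1−λ)z_{t−1} and a Monte Carlo ARL estimate can be sketched as below; the paper's contribution, replacing plug-in parameter estimates with Bayes estimates under different loss functions, is not reproduced, and all settings here are invented.

```python
import random

def ewma_run_length(rng, mean, lam=0.2, limit=0.8, max_n=10_000):
    # Run the EWMA statistic on N(mean, 1) observations until it leaves
    # the control limits; return the run length (capped at max_n).
    z = 0.0
    for i in range(1, max_n + 1):
        z = lam * rng.gauss(mean, 1.0) + (1 - lam) * z
        if abs(z) > limit:
            return i
    return max_n

rng = random.Random(3)
arl0 = sum(ewma_run_length(rng, 0.0) for _ in range(500)) / 500  # in control
arl1 = sum(ewma_run_length(rng, 1.0) for _ in range(500)) / 500  # shifted
print(arl0, arl1)
```

A good chart design makes the in-control ARL large (few false alarms) and the out-of-control ARL small (fast fault detection), which is the trade-off the ARL/SDRL/MRL comparisons above quantify.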


Bayes Theorem , Normal Distribution , Algorithms , Humans , Models, Statistical
16.
PLoS One ; 19(5): e0303254, 2024.
Article En | MEDLINE | ID: mdl-38709776

One of the key tools to understand and reduce the spread of the SARS-CoV-2 virus is testing. The total number of tests, the number of positive tests, the number of negative tests, and the positivity rate are interconnected indicators and vary with time. To better understand the relationship between these indicators, against the background of an evolving pandemic, the association between the number of positive tests and the number of negative tests is studied using a joint modeling approach. All countries in the European Union, Switzerland, the United Kingdom, and Norway are included in the analysis. We propose a joint penalized spline model in which the penalized spline is reparameterized as a linear mixed model. The model allows for flexible trajectories by smoothing the country-specific deviations from the overall penalized spline and accounts for heteroscedasticity by allowing the autocorrelation parameters and residual variances to vary among countries. The association between the number of positive tests and the number of negative tests is derived from the joint distribution for the random intercepts and slopes. The correlation between the random intercepts and the correlation between the random slopes were both positive. This suggests that, when countries increase their testing capacity, both the number of positive tests and negative tests will increase. A significant correlation was found between the random intercepts, but the correlation between the random slopes was not significant due to a wide credible interval.


COVID-19 Testing , COVID-19 , SARS-CoV-2 , Humans , COVID-19/epidemiology , COVID-19/virology , SARS-CoV-2/isolation & purification , United Kingdom/epidemiology , COVID-19 Testing/methods , Norway/epidemiology , Models, Statistical , Switzerland/epidemiology , Pandemics , European Union
17.
Malar J ; 23(1): 133, 2024 May 03.
Article En | MEDLINE | ID: mdl-38702775

BACKGROUND: Malaria is a potentially life-threatening disease caused by Plasmodium protozoa transmitted by infected Anopheles mosquitoes. Controlled human malaria infection (CHMI) trials are used to assess the efficacy of interventions for malaria elimination. The operating characteristics of statistical methods for assessing the ability of interventions to protect individuals from malaria is uncertain in small CHMI studies. This paper presents simulation studies comparing the performance of a variety of statistical methods for assessing efficacy of intervention in CHMI trials. METHODS: Two types of CHMI designs were investigated: the commonly used single high-dose design (SHD) and the repeated low-dose design (RLD), motivated by simian immunodeficiency virus (SIV) challenge studies. In the context of SHD, the primary efficacy endpoint is typically time to infection. Using a continuous time survival model, five statistical tests for assessing the extent to which an intervention confers partial or full protection under single dose CHMI designs were evaluated. For RLD, the primary efficacy endpoint is typically the binary infection status after a specific number of challenges. A discrete time survival model was used to study the characteristics of RLD versus SHD challenge studies. RESULTS: In a SHD study with the continuous time survival model, log-rank test and t-test are the most powerful and provide more interpretable results than Wilcoxon rank-sum tests and Lachenbruch tests, while the likelihood ratio test is uniformly most powerful but requires knowledge of the underlying probability model. In the discrete time survival model setting, SHDs are more powerful for assessing the efficacy of an intervention to prevent infection than RLDs. However, additional information can be inferred from RLD challenge designs, particularly using a likelihood ratio test. 
CONCLUSIONS: Different statistical methods can be used to analyze controlled human malaria infection (CHMI) experiments, and the choice of method depends on the specific characteristics of the experiment, such as the sample size allocation between the control and intervention groups and the nature of the intervention. The simulation results provide guidance on the trade-offs in statistical power when choosing between different statistical methods and study designs.
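The log-rank comparison evaluated above can be sketched for uncensored time-to-infection data. A minimal implementation, assuming distinct event times and no censoring; the arm sizes, infection-time distributions, and variable names below are illustrative assumptions, not values from the trial:

```python
import numpy as np
from scipy import stats

def logrank_test(t1, t2):
    """Log-rank test for two uncensored samples (distinct event times assumed)."""
    times = np.concatenate([t1, t2])
    grp = np.concatenate([np.zeros(len(t1)), np.ones(len(t2))])
    grp = grp[np.argsort(times)]
    n = len(times)
    o_minus_e, var = 0.0, 0.0
    for i in range(n):
        at_risk = n - i                      # subjects still event-free
        n0 = np.sum(grp[i:] == 0)            # group-0 subjects at risk
        e0 = n0 / at_risk                    # expected group-0 events (1 event per time)
        o_minus_e += (1.0 if grp[i] == 0 else 0.0) - e0
        var += e0 * (1.0 - e0)               # hypergeometric variance with d = 1
    chi2 = o_minus_e ** 2 / var
    return chi2, stats.chi2.sf(chi2, df=1)

rng = np.random.default_rng(1)
# hypothetical CHMI-like arms: intervention doubles mean time to infection
ctrl = rng.exponential(5.0, size=30)
trt = rng.exponential(10.0, size=30)
chi2, p = logrank_test(ctrl, trt)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```

With censoring or tied event times the at-risk bookkeeping becomes more involved, which is where dedicated survival packages are normally used instead.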


Malaria , Humans , Malaria/prevention & control , Animals , Research Design , Controlled Clinical Trials as Topic , Models, Statistical , Anopheles/parasitology
18.
PLoS One ; 19(5): e0289822, 2024.
Article En | MEDLINE | ID: mdl-38691561

Histograms are frequently used to perform a preliminary study of data, such as finding outliers and determining the distribution's shape. It is common knowledge that choosing an appropriate number of bins is crucial to revealing the right information. It is also well known that using bins of different widths (unequal bin widths) is preferable to using bins of equal width, provided the widths are selected carefully; however, this is a much more difficult problem. In this research, a novel AIC-based approach for histograms with unequal bin widths is proposed. We demonstrate the advantage of the suggested approach over alternatives using both extensive Monte Carlo simulations and empirical examples.
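One common AIC formulation for a histogram density estimate penalizes the k − 1 free bin probabilities. The sketch below uses that formulation (the penalty form and the quantile-based choice of unequal edges are our assumptions, not necessarily the paper's criterion) to compare equal-width and equal-frequency bins on skewed data:

```python
import numpy as np

def hist_aic(data, edges):
    """AIC for a histogram density estimate with given bin edges."""
    counts, edges = np.histogram(data, bins=edges)
    widths = np.diff(edges)
    n = data.size
    nz = counts > 0                          # empty bins contribute zero log-likelihood
    loglik = np.sum(counts[nz] * np.log(counts[nz] / (n * widths[nz])))
    k = len(counts)                          # k - 1 free parameters (probabilities sum to 1)
    return -2.0 * loglik + 2.0 * (k - 1)

rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=1000)          # skewed data, where unequal widths help
k = 10
equal_edges = np.linspace(x.min(), x.max(), k + 1)
unequal_edges = np.quantile(x, np.linspace(0, 1, k + 1))  # equal-frequency bins
print(hist_aic(x, equal_edges), hist_aic(x, unequal_edges))
```

The binning with the lower AIC would be preferred; selecting both the number of bins and their edges jointly is the harder optimization the abstract alludes to.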


Monte Carlo Method , Models, Statistical , Computer Simulation , Algorithms , Humans
19.
Sci Rep ; 14(1): 10810, 2024 05 11.
Article En | MEDLINE | ID: mdl-38734768

In this study, we have presented a novel probabilistic model called the neutrosophic Burr-III distribution, designed for applications in neutrosophic surface analysis. Neutrosophic analysis allows for the incorporation of vague and imprecise information, reflecting the reality that many real-world problems involve ambiguous data. This ability to handle vagueness can lead to more robust and realistic models, especially in situations where classical models fall short. We have explored the neutrosophic Burr-III distribution in order to deal with ambiguity and vagueness in data where the classical Burr-III distribution falls short. This distribution offers valuable insights into various reliability properties, moment expressions, order statistics, and entropy measures, making it a versatile tool for analyzing complex data. To assess the practical relevance of our proposed distribution, we applied it to real-world data sets and compared its performance against the classical Burr-III distribution. The findings revealed that the neutrosophic Burr-III distribution outperformed the classical Burr-III distribution in capturing the underlying data characteristics, highlighting its potential as a superior modeling tool in various fields.
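The classical Burr-III baseline against which the proposed model is compared can be fit with `scipy.stats.burr` (SciPy's `burr` implements Burr Type III). The data below are synthetic and the true shape values are our own choice; the neutrosophic extension itself is not implemented here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
c_true, d_true = 2.0, 1.5
data = stats.burr.rvs(c_true, d_true, size=800, random_state=rng)

# fit the two shape parameters with location and scale held fixed,
# then check goodness of fit with a Kolmogorov-Smirnov statistic
c_hat, d_hat, loc, scale = stats.burr.fit(data, floc=0, fscale=1)
ks = stats.kstest(data, "burr", args=(c_hat, d_hat, loc, scale))
print(f"c={c_hat:.2f}, d={d_hat:.2f}, KS={ks.statistic:.3f}")
```

The KS statistic is the same goodness-of-fit measure the abstract's comparison would rely on; a neutrosophic analysis would additionally carry interval-valued parameters through this fit.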


COVID-19 , Models, Statistical , COVID-19/epidemiology , COVID-19/virology , Humans , SARS-CoV-2/isolation & purification
20.
Stat Appl Genet Mol Biol ; 23(1)2024 Jan 01.
Article En | MEDLINE | ID: mdl-38736398

Longitudinal time-to-event analysis is a statistical method to analyze data where covariates are measured repeatedly. In survival studies, the risk for an event is estimated using the Cox proportional hazards model or the extended Cox model for exogenous time-dependent covariates. However, these models are inappropriate for endogenous time-dependent covariates such as longitudinally measured biomarkers, e.g., Carcinoembryonic Antigen (CEA). Joint models that can simultaneously model the longitudinal covariates and time-to-event data have been proposed as an alternative. The present study highlights the importance of choosing the baseline hazard to get more accurate risk estimation. The study used colon cancer patient data to illustrate and compare four different joint models, which differ in the choice of baseline hazard [piecewise-constant Gauss-Hermite (GH), piecewise-constant pseudo-adaptive GH, Weibull accelerated failure time model with GH, and B-spline GH]. We conducted a simulation study to assess model consistency with varying sample sizes (N = 100, 250, 500) and censoring proportions (20%, 50%, 70%). In the colon cancer patient data, based on the Akaike information criterion (AIC) and Bayesian information criterion (BIC), piecewise-constant pseudo-adaptive GH was found to be the best-fitting model. Despite differences in model fit, the hazards obtained from the four models were similar. The study identified composite stage as a prognostic factor for time-to-event and the longitudinal outcome, CEA, as a dynamic predictor for overall survival in colon cancer patients. Based on the simulation study, Piecewise-PH-aGH was found to be the best model, with the lowest AIC and BIC values and the highest coverage probability (CP). The bias and RMSE of all models were competitive, but Piecewise-PH-aGH showed the least bias and RMSE in most combinations and the shortest computation time, demonstrating its computational efficiency. 
This study is the first of its kind to discuss the choice of baseline hazards in this setting.
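The piecewise-constant baseline hazard underlying the Piecewise-PH models above has a closed-form maximum likelihood estimate: events divided by person-time within each interval. A minimal sketch, assuming no censoring and illustrative cut points (the data are simulated, not the colon cancer cohort):

```python
import numpy as np

def piecewise_hazard(times, events, cuts):
    """MLE of a piecewise-constant hazard: events / person-time per interval."""
    edges = np.concatenate(([0.0], np.asarray(cuts, dtype=float), [np.inf]))
    rates = np.empty(len(edges) - 1)
    for j in range(len(rates)):
        lo, hi = edges[j], edges[j + 1]
        # person-time each subject contributes to [lo, hi)
        persontime = np.clip(times - lo, 0.0, hi - lo).sum()
        # observed events falling in [lo, hi)
        d = np.sum((times >= lo) & (times < hi) & (events == 1))
        rates[j] = d / persontime if persontime > 0 else np.nan
    return rates

rng = np.random.default_rng(3)
t = rng.exponential(2.0, size=2000)          # true constant hazard of 0.5
e = np.ones_like(t, dtype=int)               # no censoring in this sketch
print(piecewise_hazard(t, e, cuts=[1.0, 2.0]))
```

In a full joint model this baseline is estimated jointly with the longitudinal submodel rather than in closed form, but the same interval structure determines how flexibly the hazard can vary over time.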


Colonic Neoplasms , Proportional Hazards Models , Humans , Longitudinal Studies , Colonic Neoplasms/mortality , Colonic Neoplasms/genetics , Survival Analysis , Computer Simulation , Models, Statistical , Bayes Theorem , Carcinoembryonic Antigen/blood
...